Hyperparameter optimization is an important problem in machine learning, since it aims to achieve state-of-the-art performance for any model. Considerable effort has been devoted to this area, e.g., random search, grid search, and Bayesian optimization. In this paper, we model the hyperparameter optimization process as a Markov decision process and solve it with reinforcement learning. A novel hyperparameter optimization method based on Soft Actor-Critic and hierarchical mixture arrays is proposed. Experiments show that the proposed method can obtain better hyperparameters in a shorter time.
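To make the MDP framing concrete, here is a minimal toy sketch (not the authors' implementation; a Soft Actor-Critic agent would replace the random policy used here, and the synthetic score function stands in for an actual training-plus-validation run):

```python
# Hypothetical sketch: hyperparameter search as a Markov decision process.
# State = current hyperparameters, action = perturb one of them,
# reward = improvement in a (here synthetic) validation score.
import math
import random

class HyperparamEnv:
    """Toy MDP over two hyperparameters: learning rate and weight decay."""

    def __init__(self):
        self.state = {"lr": 1e-2, "wd": 1e-3}
        self.last_score = self._evaluate(self.state)

    def _evaluate(self, hp):
        # Stand-in for training + validation; peaks near lr=1e-3, wd=1e-4.
        return -((math.log10(hp["lr"]) + 3) ** 2 + (math.log10(hp["wd"]) + 4) ** 2)

    def step(self, action):
        key, factor = action                      # multiplicatively scale one hyperparameter
        self.state[key] = min(max(self.state[key] * factor, 1e-6), 1.0)
        score = self._evaluate(self.state)
        reward = score - self.last_score          # reward = improvement over previous score
        self.last_score = score
        return dict(self.state), reward

env = HyperparamEnv()
for _ in range(50):                               # random policy; an SAC agent would go here
    action = (random.choice(["lr", "wd"]), random.choice([0.5, 0.9, 1.1, 2.0]))
    state, reward = env.step(action)
print("final hyperparameters:", env.state, "score:", env.last_score)
```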
This paper presents a new statistical analysis aimed at explaining the recent success of pre-training techniques in natural language processing (NLP). We prove that when the classes of the pre-training task (e.g., the different words in the masked language modeling task) are sufficiently diverse, in the sense that the least singular value of the last linear layer in pre-training (denoted $\tilde{\nu}$) is large, then pre-training can significantly improve the sample efficiency of downstream tasks. In particular, we show that the transfer learning excess risk enjoys an $O\left(\frac{1}{\tilde{\nu}\sqrt{n}}\right)$ rate, compared with the $O\left(\frac{1}{\sqrt{m}}\right)$ rate of standard supervised learning. Here $n$ is the number of pre-training data and $m$ is the number of data in the downstream task, and typically $n \gg m$. Our proof relies on a vector-form Rademacher complexity chain rule for disassembling composite function classes, and a modified self-concordance condition. These techniques may be of independent interest.
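Restating the abstract's central rate comparison in display form (same notation as above):
\[
\text{transfer learning: } O\!\left(\frac{1}{\tilde{\nu}\sqrt{n}}\right)
\qquad \text{vs.} \qquad
\text{standard supervised learning: } O\!\left(\frac{1}{\sqrt{m}}\right),
\qquad n \gg m,
\]
so a large $\tilde{\nu}$ together with abundant pre-training data $n$ can drive the excess risk below what the $m$ downstream samples alone would allow.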
In performative prediction, the deployment of a predictive model influences the distribution that generates future data, a phenomenon that is ignored in classical supervised learning. In this closed-loop setting, a natural measure of performance, the performative risk, captures the expected loss incurred by a predictive model after deployment. The core difficulty in minimizing the performative risk is that the data distribution itself depends on the model parameters. This dependence is governed by the environment and is not under the learner's control. As a result, even the choice of a convex loss function can lead to a highly non-convex performative risk minimization problem. Prior work has identified a pair of general conditions on the loss and on the mapping from model parameters to distributions that imply convexity of the performative risk. In this paper, we relax these assumptions and focus on obtaining weaker notions of convexity, without sacrificing the amenability of the performative risk minimization problem to iterative optimization methods.
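For reference, the performative risk discussed above is usually written as follows (standard notation from the performative prediction literature, not quoted from this paper):
\[
\mathrm{PR}(\theta) \;=\; \mathbb{E}_{Z \sim \mathcal{D}(\theta)}\!\left[\ell(Z;\theta)\right],
\]
where $\mathcal{D}(\theta)$ is the data distribution induced by deploying the model with parameters $\theta$; it is this dependence of $\mathcal{D}$ on $\theta$ that can make $\mathrm{PR}$ non-convex even when $\ell$ is convex.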
Recent breakthroughs based on big/foundation models reveal a vague avenue for artificial intelligence: big data, big/foundation models, big learning, $\cdots$. Following that avenue, here we elaborate on the newly introduced big learning. Specifically, big learning exhaustively exploits the information inherent in large-scale complete/incomplete data by simultaneously learning to model many-to-all joint/conditional/marginal data distributions (hence the name big learning) with one universal foundation model. We reveal that big learning is what existing foundation models are implicitly doing; accordingly, our big learning provides high-level guidance for the flexible design and improvement of foundation models, thus accelerating true self-learning on the Internet. In addition, big learning ($i$) comes with marvelous flexibility for both training-data and training-task customization; ($ii$) potentially delivers all joint/conditional/marginal data capabilities after training; ($iii$) significantly reduces the training-test gap with improved model generalization; and ($iv$) unifies conventional machine learning paradigms, e.g., supervised learning, unsupervised learning, generative learning, etc., and enables their flexible cooperation, manifesting a universal learning paradigm.
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or benefit only marginally, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, namely, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
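As a rough illustration of what relation-based distillation can look like (a hedged sketch, not the TinyMIM code: here token relations are taken to be softmax-normalized token-to-token similarity maps, and the student mimics the teacher's relations from an intermediate layer):

```python
# Hypothetical sketch of token-relation distillation: the student is trained to
# match the teacher's token-to-token similarity structure rather than raw
# features or the CLS token. The layer choice and relation-level target follow
# the findings quoted above; the exact formulation is an assumption.
import torch
import torch.nn.functional as F

def token_relations(tokens: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """tokens: (batch, num_tokens, dim) -> log-relations of shape (batch, num_tokens, num_tokens)."""
    tokens = F.normalize(tokens, dim=-1)
    sim = tokens @ tokens.transpose(-1, -2) / temperature
    return F.log_softmax(sim, dim=-1)

def relation_distill_loss(student_tokens, teacher_tokens):
    """KL divergence between student and teacher token-relation maps."""
    s_rel = token_relations(student_tokens)
    with torch.no_grad():
        t_rel = token_relations(teacher_tokens)
    return F.kl_div(s_rel, t_rel, log_target=True, reduction="batchmean")

# Example with random features standing in for ViT token embeddings; note the
# relation maps let a narrow student match a wide teacher despite different dims.
student = torch.randn(2, 196, 192, requires_grad=True)   # e.g. ViT-Tiny width
teacher = torch.randn(2, 196, 768)                        # e.g. ViT-Base width
loss = relation_distill_loss(student, teacher)
loss.backward()
print(float(loss))
```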
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
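The following is a hedged, simplified sketch of the high-level idea (not the released CMT code): image tokens and point-cloud tokens are concatenated into one multi-modal token sequence, a set of object queries cross-attends to it, and 3D boxes are predicted directly with no explicit view transformation; position encodings derived from 3D coordinates are the assumed alignment mechanism, and all feature dimensions are illustrative.

```python
# Hypothetical minimal cross-modal detector: tokens from both modalities share
# a 3D-position encoding and are decoded by object queries into boxes.
import torch
import torch.nn as nn

class TinyCrossModalDetector(nn.Module):
    def __init__(self, dim=256, num_queries=100, num_classes=10):
        super().__init__()
        self.img_proj = nn.Linear(256, dim)      # image backbone features -> tokens
        self.pts_proj = nn.Linear(64, dim)       # point/voxel features -> tokens
        self.pos_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.queries = nn.Embedding(num_queries, dim)
        layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.box_head = nn.Linear(dim, 7)        # (x, y, z, w, l, h, yaw)
        self.cls_head = nn.Linear(dim, num_classes)

    def forward(self, img_feats, img_xyz, pts_feats, pts_xyz):
        # Implicit spatial alignment: both modalities receive encodings of 3D points.
        tokens = torch.cat(
            [self.img_proj(img_feats) + self.pos_mlp(img_xyz),
             self.pts_proj(pts_feats) + self.pos_mlp(pts_xyz)], dim=1)
        q = self.queries.weight.unsqueeze(0).expand(tokens.size(0), -1, -1)
        hs = self.decoder(q, tokens)             # object queries attend to all tokens
        return self.box_head(hs), self.cls_head(hs)

model = TinyCrossModalDetector()
boxes, logits = model(torch.randn(1, 400, 256), torch.randn(1, 400, 3),
                      torch.randn(1, 1000, 64), torch.randn(1, 1000, 3))
print(boxes.shape, logits.shape)   # (1, 100, 7) and (1, 100, 10)
```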
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset; a model trained on this smaller distilled dataset can attain performance comparable to a model trained on the original training dataset. However, existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility, and the security risks stemming from them have not been explored. This study performs the first backdoor attack against models trained on data distilled by dataset distillation techniques in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
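A rough illustration of the trigger-injection step that NAIVEATTACK performs on the raw data before distillation (a hedged sketch; trigger size, location, and poisoning ratio are illustrative assumptions, and DOORPING's iterative trigger updates are not shown):

```python
# Hypothetical sketch: stamp a small white patch (the trigger) onto a fraction of
# the raw training images and relabel them to the attacker's target class, so that
# the subsequent dataset-distillation step bakes the backdoor into the synthetic
# dataset. All values below are illustrative, not the paper's settings.
import numpy as np

def poison_dataset(images, labels, target_class=0, ratio=0.05, patch=4, seed=0):
    """images: (N, H, W, C) uint8 array; returns poisoned copies and poisoned indices."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(ratio * len(images)), replace=False)
    images[idx, -patch:, -patch:, :] = 255          # trigger in the bottom-right corner
    labels[idx] = target_class                      # attacker-chosen label
    return images, labels, idx

# Example on random stand-in data shaped like CIFAR-10:
imgs = np.random.randint(0, 256, size=(1000, 32, 32, 3), dtype=np.uint8)
lbls = np.random.randint(0, 10, size=1000)
p_imgs, p_lbls, poisoned = poison_dataset(imgs, lbls)
print(len(poisoned), "images poisoned; distillation would then run on p_imgs/p_lbls")
```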
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has explored learning strategies that help the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better handle the regression problem, following the easy-to-hard progression of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets. The experimental results indicate that PMT-IQA outperforms the compared approaches, and that both the MS and PMT modules improve the model's performance.
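To illustrate the easy-to-hard idea, here is a hedged sketch of a progressive multi-task weighting scheme (the auxiliary "easy" task of classifying images into coarse quality bins is an assumption for illustration, not necessarily PMT-IQA's choice):

```python
# Hypothetical easy-to-hard progressive multi-task schedule: an auxiliary coarse
# task dominates early training, and the harder quality-score regression
# gradually takes over as training proceeds.
import torch
import torch.nn.functional as F

def progressive_loss(score_pred, score_gt, bin_logits, bin_gt, step, total_steps):
    lam = min(step / total_steps, 1.0)            # ramps from 0 to 1 over training
    easy = F.cross_entropy(bin_logits, bin_gt)    # coarse quality-bin classification
    hard = F.mse_loss(score_pred.squeeze(-1), score_gt)  # quality-score regression
    return (1.0 - lam) * easy + lam * hard

# Example with random stand-ins for network outputs and labels:
loss = progressive_loss(torch.randn(8, 1), torch.rand(8),
                        torch.randn(8, 5), torch.randint(0, 5, (8,)),
                        step=100, total_steps=1000)
print(float(loss))
```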
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility of deep models transferring knowledge from language to music, by fine-tuning large language models, pre-trained on a massive text corpus, on only hundreds of MIDI files of drum performances. We show that, by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while a model that is not pre-trained (a Transformer) shows no such ability beyond naive repetition. Evaluating generated music is a challenging task; evaluating drum grooves, for which there is little precedent in the literature, is even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
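To show the kind of data transformation such a setup requires, here is a hedged sketch of turning drum events into text a language model can be fine-tuned on; the "pitch@tick" token format is purely an illustrative assumption, not the paper's encoding, and the event list stands in for output of any MIDI parser.

```python
# Hypothetical encoding of a drum groove as a plain-text token sequence, so that
# a text-pre-trained language model can be fine-tuned on it and its generations
# can be decoded back into MIDI-style events.
events = [(36, 0), (42, 0), (38, 240), (42, 240), (36, 480), (42, 480)]  # (drum pitch, tick)

def events_to_text(events):
    return " ".join(f"{pitch}@{tick}" for pitch, tick in events)

def text_to_events(text):
    return [tuple(int(x) for x in tok.split("@")) for tok in text.split()]

prompt = events_to_text(events)
print(prompt)                       # "36@0 42@0 38@240 42@240 36@480 42@480"
assert text_to_events(prompt) == events
# Fine-tuning would treat each such string as an ordinary text sample; generation
# samples new strings that are decoded back into drum events for evaluation.
```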
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects, i.e., feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method in the 10/30-shot settings. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be available.
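A hedged sketch of the two "reference" steps described above (not the RefT code; module shapes and the sigmoid re-weighting are illustrative assumptions): (1) support masks pool support features into a dynamic class center that re-weights query features channel-wise; (2) support object queries refine query object queries via cross-attention.

```python
# Hypothetical two-step reference module: feature-level re-weighting from a
# mask-pooled class center, then instance-level cross-attention between object queries.
import torch
import torch.nn as nn

class ReferenceTwiceSketch(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.center_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, query_feats, support_feats, support_masks, query_obj_q, support_obj_q):
        # (1) Feature level: mask-pooled class center -> channel re-weighting of query features.
        masks = support_masks.flatten(1).unsqueeze(-1)                  # (B, HW, 1)
        center = (support_feats * masks).sum(1) / masks.sum(1).clamp(min=1e-6)
        weights = torch.sigmoid(self.center_mlp(center)).unsqueeze(1)   # (B, 1, C)
        query_feats = query_feats * weights
        # (2) Instance level: query object queries attend to support object queries.
        refined, _ = self.cross_attn(query_obj_q, support_obj_q, support_obj_q)
        return query_feats, query_obj_q + refined

m = ReferenceTwiceSketch()
qf, oq = m(torch.randn(2, 1024, 256), torch.randn(2, 1024, 256),
           torch.rand(2, 32, 32), torch.randn(2, 100, 256), torch.randn(2, 100, 256))
print(qf.shape, oq.shape)   # (2, 1024, 256) and (2, 100, 256)
```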